On how various plans miss the hard bits of the alignment challenge
This post has been recorded as part of the LessWrong Curated Podcast, and can be listened to on Spotify, Apple Podcasts, and Libsyn.
(As usual, this post was written by Nate Soares with some help and editing from Rob Bensinger.)
In my last post, I described a “hard bit” of the challenge of aligning AGI—the sharp left turn that comes when your system slides into the “AGI” capabilities well, the fact that alignment doesn’t generalize similarly well at this turn, and the fact that this turn seems likely to break a bunch of your existing alignment properties.
Here, I want to briefly discuss a variety of current research proposals in the field, to explain why I think this problem is currently neglected.
I also want to mention research proposals that do strike me as having some promise, or that strike me as adjacent to promising approaches.
Before getting into that, let me be very explicit about three points:
On my model, solutions to how capabilities generalize further than alignment are necessary but not sufficient. There is dignity in attacking a variety of other real problems, and I endorse that practice.
The imaginary versions of people in the dialogs below are not the same as the people themselves. I’m probably misunderstanding the various proposals in important ways, and/or rounding them to stupider versions of themselves along some important dimensions.[1] If I’ve misrepresented your view, I apologize.
I do not subscribe to the Copenhagen interpretation of ethics wherein someone who takes a bad swing at the problem (or takes a swing at a different problem) is more culpable for civilization’s failure than someone who never takes a swing at all. Everyone whose plans I discuss below is highly commendable, laudable, and virtuous by my accounting.
Also, many of the plans I touch upon below are not being given the depth of response that I’d ideally be able to give them, and I apologize for not engaging with their authors in significantly more depth first. I’ll be especially cursory in my discussion of some MIRI researchers and research associates like Vanessa Kosoy and Scott Garrabrant.[2]
In this document I’m attempting to summarize my high-level view of the approaches I know about; I’m not attempting to provide full arguments for why I think particular approaches are more or less promising.
Think of the below as a window into my thought process, rather than an attempt to state or justify my entire background view. And obviously, if you disagree with my thoughts, I welcome objections.
So, without further ado, I’ll explain why I think that the larger field is basically not working on this particular hard problem:
Reactions to specific plans
Owen Cotton-Barratt & Truthful AI
Imaginary, possibly-mischaracterized-by-Nate version of Owen: What if we train our AGIs to be truthful? If our AGIs were generally truthful, we could just ask them if they’re plotting to be deceptive, and if so how to fix it, and we could do these things early in ways that help us nip the problems in the bud before they fester, and so on and so forth.
Even if that particular idea doesn’t work, it seems like our lives are a lot easier insofar as the AGI is truthful.
Nate: “Truthfulness” sure does sound like a nice property for our AGIs to have. But how do you get it in there? And how do you keep it in there, after that sharp left turn? If this idea is to make any progress on the hard problem we’re discussing, it would have to come from some property of “truthfulness” that makes it more likely than other desirable properties to survive the great generalization of capabilities.
Like, even simpler than the problem of an AGI that puts two identical strawberries on a plate and does nothing else is the problem of an AGI that turns as much of the universe as possible into diamonds. This is easier because, while it still requires that we have some way to direct the system towards a concept of our choosing, we no longer require corrigibility. (Also, “diamond” is a significantly simpler concept than “strawberry” and “cellularly identical”.)
It seems to me that we have basically no idea how to do this. We can train the AGI to be pretty good at building diamond-like things across a lot of training environments, but once it takes that sharp left turn, by default, it will wander off and do some other thing, like how humans wandered off and invented birth control.
In my book, solving this hard problem so well that we could feasibly get an AGI that predictably maximizes diamond (after its capabilities start generalizing hard) would constitute an enormous advance.
Solving the hard problem so well that we could feasibly get an AGI that predictably answers operator questions truthfully would constitute a similarly enormous advance, because we would have figured out how to keep a highly capable system directed at any one thing of our choosing.
Now, in real life, building a truthful AGI is much harder than building a diamond optimizer, because ‘truth’ is a concept that’s much more fraught than ‘diamond’. (To see this, observe that the definition of “truth” routes through tricky concepts like “ways the AI communicated with the operators” and “the mental state of the operators”, and involves grappling with tricky questions like “what ways of translating the AI’s foreign concepts into human concepts count as manipulative?” and “what can be honestly elided?”, and so on, whereas diamond is just carbon atoms bound covalently in tetrahedral lattices.)
So as far as I can tell, from the perspective of this hard problem, Owen’s proposal boils down to “Wouldn’t it be nice if the tricky problems were solved, and we managed to successfully direct our AGIs to be truthful?” Well, sure, that would be nice, but it’s not helping solve our problem. In fact, this problem subsumes the whole diamond maximizer problem, but replaces the concept of “diamond” (that we obviously can’t yet direct an AGI to optimize, diamond more clearly being a physical phenomenon far removed from the AGI’s raw sensory inputs) with the concept of “truth” (which is abstract enough that we can easily forget that it’s a much more difficult-to-describe physical phenomenon far removed from the AGI’s raw sensory inputs).
(And this hasn’t even touched on how if you did manage to get an AGI that keeps optimizing for truth-telling after it falls into the capabilities well, then you still have to make it corrigible, on pain of extinction. But I digress.)
Maybe Owen does think that the goal of “tell the truth” generalizes more readily than “optimize diamond”, for some reason? But if so, I haven’t seen him mention it, except when I explicitly prompted him while having this argument in person. Which doesn’t seem like a very promising sign to me. My read is that he’s just failing to engage with this hard problem.
Ryan Greenblatt & Eliciting Latent Knowledge
Imaginary Ryan, because Paul was on vacation last time I tried to understand ELK: As we train the nascent AI up to an AGI, we’re going to simultaneously train a second system (the “ELK head”) that can elicit its latent knowledge. The latter system will be able to tell us the moment that the former system is starting to do things like contemplate deception. We’re then going to combine this with some third adversarial system, which can use that latent knowledge to expose the bad behavior to gradients, so that we can nip it in the bud.
Basically all this requires is that, whatever intelligence gradient descent (or some other optimizing procedure) can create, it can also decode, which doesn’t seem all that big an ask: whatever optimization method you were using that found this intelligent reasoner, there should be a dual optimization process that finds its decoder, if you’re a little clever about it.
Nate: So, set aside questions of whether that sort of thing (with the separate knowledge-elicitor and adversary) will turn out to be practical, because we’re talking about whether the plan attacks the key problem.
Suppose we have an ELK head, and an adversary, that we’re using to train a nascent AGI. The AGI takes a sharp left turn. Now it’s dangerously capable, and suddenly a bunch of misalignments which it previously didn’t notice within itself are relevant. And you’ve got to get them out.
Your first problem is that the recent capabilities gains made by the AGI might not have come from gradient descent (much like how humans’ sudden explosion of technological knowledge accumulated in our culture rather than our genes, once we turned the corner). You might not be able to just “expose the bad behavior” to gradients that you can hit to correct the thing, at least not easily and quickly.
Your second problem is that the AGI’s concepts might rapidly get totally uninterpretable to your ELK head. Like, you could imagine doing neuroimaging on your mammals all the way through the evolution process. They’ve got some hunger instincts in there, but it’s not like they’re smart enough yet to represent the concept of “inclusive genetic fitness” correctly, so you figure you’ll just fix it when they get capable enough to understand the alternative (of eating because it’s instrumentally useful for procreation). And so far you’re doing great: you’ve basically decoded the visual cortex, and have a pretty decent understanding of what it’s visualizing.
You maintain this pretty well as it walks up through to primate, and then suddenly it takes a sharp left turn and invents its own internal language and a bunch of abstract concepts, and suddenly you find your visualization tools to be quite lacking for interpreting its abstract mathematical reasoning about topology or whatever.
Analogously, your ELK head’s abilities are liable to fall off a cliff right as the AGI’s capabilities start generalizing way outside of its training distribution.
And if they don’t, then this ELK head is (in this hypothetical) able to decode and understand the workings of an alien mind. Likely a kludgey behemoth of an alien mind. This itself is liable to require quite a lot of capability, quite plausibly of the sort that humanity gets first from the systems that took sharp left turns, rather than systems that ground along today’s scaling curves until they scaled that far.
Or in other words, if your ELK head does keep pace with your AGI, and takes a sharp left turn at the same time as it… then, well, now you’re basically back to the “Truthful AI” proposal. How do you keep your ELK head reporting accurately (and doing so corrigibly), as it undergoes that sharp left turn?
This proposal seems to me like it’s implicitly assuming that most of the capabilities gains come from the slow grind of gradient descent, in a world where the systems don’t take sharp left turns and rapidly become highly capable in a wide variety of new (out-of-distribution) domains.
Which seems to me like it’s mostly just assuming its way out from under the hard problem—and thus, on my models, assuming its way clean out of reality.
And if I imagine attempting to apply this plan inside of the reality I think I live in, I don’t see how it plans to address the hard part of the problem, beyond saying “try training it against places where it knows it’s diverging from the goal before the sharp turn, and then hope that it generalizes well or won’t fight back”, which doesn’t instill a bunch of confidence in me (and which I don’t expect to work).
Eric Drexler & AI Services
Imaginary Eric: Well, sure, AGI could get real dangerous if you let one system do everything under one umbrella. But that’s not how good engineers engineer things. You can and should split your AI systems into siloed services, each of which can usefully help humanity with some fragment of whichever difficult sociopolitical or physical challenge you’re hoping to tackle, but none of which constitutes an adversarial optimizer (with goals over the future) in its own right.
Nate: So mostly I expect that, if you try to split these systems into services, then you either fail to capture the heart of intelligence and your siloed AIs are irrelevant, or you wind up with enough AGI in one of your silos that you have a whole alignment problem (hard parts and all) in there.
Like, I see this plan as basically saying “yep, that hard problem is in fact too hard, let’s try to dodge it, by having humans + narrow AI services perform the pivotal act”. Setting aside how I don’t particularly expect this to work, we can at least hopefully agree that it’s attempting to route around the problems that seem to me to be central, rather than attempting to solve them.
Evan Hubinger, in a recent personal conversation
Imaginary Evan: It’s hard, in the modern paradigm, to separate the system’s values from its capabilities and from the way it was trained. All we need to do is find a training regimen that leads to AIs that are both capable and aligned. At which point we can just make it publicly available, because it’s not like people will be trying to disalign their AIs.
Nate: So, first of all, you haven’t exactly made the problem easier.
As best I can tell, this plan amounts to “find a training method that not only can keep a system aligned through the sharp left turn, but must, and then popularize it”. Which has, like, bolted two additional steps atop an assumed solution to some hard problems. So this proposal does not seem, to me, to make any progress towards solving those hard problems.
(Also, the observation “capabilities and alignment are fairly tightly coupled in the modern paradigm” doesn’t seem to me like much of an argument that they’re going to stay coupled after the ol’ left turn. Indeed, I expect they won’t stay coupled in the ways you want them to. Assuming that this modern desirable property will hold indefinitely seems dangerously close to just assuming this hard problem away, and thus assuming your way clean out of what-I-believe-to-be-reality.)
But maybe I just don’t understand this proposal yet (and I have had some trouble distilling things I recognize as plans out of Evan’s writing, so far).
A fairly straw version of someone with technical intuitions like Richard Ngo’s or Rohin Shah’s
Imaginary Richard/Rohin: You seem awfully confident in this sharp left turn thing. And that the goals it was trained for won’t just generalize. This seems characteristically overconfident. For instance, observe that natural selection didn’t try to get the inner optimizer to be aligned with inclusive genetic fitness at all. For all we know, a small amount of cleverness in exposing inner-misaligned behavior to the gradients will just be enough to fix the problem. And even if not that-exact-thing, then there are all sorts of ways that some other thing could come out of left field and just render the problem easy. So I don’t see why you’re worried.
Nate: My model says that the hard problem rears its ugly head by default, in a pretty robust way. Clever ideas might suffice to subvert the hard problem (though my guess is that we need something more like understanding and mastery, rather than just a few clever ideas). I have considered an array of clever ideas that look to me like they would predictably-to-me fail to solve the problems, and I admit that my guess is that you’re putting most of your hope on small clever ideas that I can already see would fail. But perhaps you have ideas that I do not. Do you yourself have any specific ideas for tackling the hard problem?
Imaginary Richard/Rohin: Train it, while being aware of inner alignment issues, and hope for the best.
Nate: That doesn’t seem to me to even start to engage with the issue where the capabilities fall into an attractor and the alignment doesn’t.
Perhaps sometime we can both make a list of ways to train with inner alignment issues in mind, and then share them with each other, so that you can see whether you think I’m lacking awareness of some important tool you expect to be at our disposal, and so that I can go down your list and rattle off the reasons why the proposed training tools don’t look to me like they result in alignment that is robust to sharp left turns. (Or find one that surprises me, and update.) But I don’t want to delay this post any longer, so, some other time, maybe.
Another recent proposal
Imaginary Anonymous-Person-Whose-Name-I’ve-Misplaced: Okay, but maybe there is a pretty wide attractor basin around my own values, though. Like, maybe not my true values, but around a bunch of stuff like being low-impact and deferring to the operators about what to do and so on. You don’t need to be all that smart, nor have a particularly detailed understanding of the subtleties of ethics, to figure out that it’s bad (according to me) to kill all humans.
Nate: Yeah, that’s basically the idea behind corrigibility, and is one reason why corrigibility is plausibly a lot easier to get than a full-fledged CEV sovereign. But this observation doesn’t really engage with the question of how to point the AGI towards that concept, and how to cause its behavior to be governed by that concept in a fashion that’s robust to the sharp left turn where capabilities start to really generalize.
Like, yes, some directions are easier to point an AI in, on account of the direction itself being simpler to conceptualize, but that observation alone doesn’t say anything about how to determine which direction an AI is pointing after it falls into the capabilities well.
More generally, saying “maybe it’s easy” is not the same as solving the problem. Maybe it is easy! But it’s not going to get solved unless we have people trying to solve it.
Vivek Hebbar, summarized (perhaps poorly) from last time we spoke of this in person
Imaginary Vivek: Hold on, the AGI is being taught about what I value every time it tries something and gets a gradient about how well that promotes the thing I value. At least, assuming for the moment that we have a good ability to evaluate the goodness of the consequences of a given action (which seems fair, because it sounds like you’re arguing for a way that we’d be screwed even if we had the One True Objective Function).
Like, you said that all aspects of reality are whispering to the nascent AGI of what it means to optimize, but few parts of reality are whispering of what to optimize for—whereas it looks to me like every gradient the AGI gets is whispering a little bit of both. So in particular, it seems to me like if you did have the one true objective function, you could just train good and hard until the system was both capable and aligned.
Nate: This seems to me like it’s implicitly assuming that all of the system’s cognitive gains come from the training. Like, with every gradient step, we are dragging the system one iota closer to being capable, and also one iota closer to being good, or something like that.
To which I say: I expect many of the cognitive gains to come from elsewhere, much as a huge number of the modern capabilities of humans are encoded in their culture and their textbooks rather than in their genomes. Because there are slopes in capabilities-space that an intelligence can snowball down, picking up lots of cognitive gains, but not alignment, along the way.
Assuming that this is not so, seems to me like simply assuming this hard problem away.
And maybe you simply don’t believe that it’s a real problem; that’s fine, and I’d be interested to hear why you think that. But I have not yet heard a proposed solution, as opposed to an objection to the existence of the problem in the first place.
John Wentworth & Natural Abstractions
Imaginary John: I suspect there’s a common format to concepts, that is a fairly objective fact about the math of the territory, and that—if mastered—could be used to understand an AGI’s concepts. And perhaps select the ones we wish it would optimize for. Which isn’t the whole problem, but sure is a big chunk of the problem. (And other chunks might well be easier to address given mastery of the fairly-objective concepts of “agent” and “optimizer” and so on.)
Nate: This does seem to me like it’s trying to attack the actual problem! I have my doubts about this particular line of research (and those doubts are on my list of things to write up), but hooray for a proposal that, if it succeeded by its own lights, would address this hard problem!
Imaginary John: Well, uh, these days I’m mostly focusing on using my flimsy non-mastered grasp of the common-concept format to try to give a descriptive account of human values, because for some reason that’s where I think the hope is. So I’m not actually working too much on this thing that you think takes a swing at the real problem (although I do flirt with it occasionally).
Nate: :’(
Imaginary John: Look, I didn’t want to break the streak, OK.
Rob Bensinger, reading this draft: Wait, why do you see John’s proposal as attacking the central problem but not, for example, Eric Drexler’s Language for Intelligent Machines (summarized here)?
Nate: I understand Eric to be saying “maybe humans deploying narrow AIs will be capable enough to end the acute risk period before an AGI can (in which case we can avoid ever using AIs that have taken sharp left turns)”, whereas John is saying “maybe a lot of objective facts about the territory determine which concepts are useful, and by understanding the objectivity of concepts we can become able to understand even an alien mind’s concepts”.
I think John’s guess is wrong (at least in the second clause), but it seems aimed at taking an AI system that has snowballed down a capabilities slope in the way that humans snowballed, and identifying its concepts in a way that’s stable to changes in the AI’s ontology—which is step one in the larger challenge of figuring out how to robustly direct an AGI’s motivations at the content of a particular concept it has.
My understanding of Eric’s idea, in contrast, is “I think there’s a language these siloed components could use that’s not so expressive as to allow them to be dangerous, but is expressive enough to allow them to help humans.” To which my basic reply is roughly “The problem is that the non-siloed systems are going to start snowballing and end the world before the human+silo systems can save the world.” As far as I can tell, Eric’s attempting to route around the problem, whereas John’s attempting to solve it.[3]
Neel Nanda & Theories of Impact for Interpretability
Imaginary Neel: What if we get a lot of interpretability?
Nate: That would be great, and I endorse developing such tools.
I think this will only solve the hard problems if the field succeeds at interpretability so wildly that (a) our interpretability tools continue to work on fairly difficult concepts in a post-left-turn AGI; (b) that AGI has an architecture that turns out to be especially amenable to being aimed at some concept of our choosing; and (c) the interpretability tools grant us such a deep understanding of this alien mind that we can aim it using that understanding.
I admit I’m skeptical of all three. To be clear, better interpretability tools put us in a better position even if they don’t clear these lofty bars; in real life, I expect interpretability to play the smaller role of a force-multiplier that awaits some other plan for addressing the hard problems.
Such tools are great to have and worth building, and I full-throatedly endorse humanity putting more effort into interpretability.
It simultaneously doesn’t look to me like people are seriously aiming for “develop such a good ability to understand minds that we can reshape/rebuild them to be aimable in whatever time we have after we get one”. It looks to me like the sights are currently set at much lower and more achievable targets, and that current progress is consistent with never hitting the more ambitious targets, the ones that would let us understand and reshape the first artificial minds into something aligned (fast enough to be relevant).
But if some ambitious interpretability researchers do set their sights on the sharp left turn and the generalization problem, then I would indeed count this as a real effort by humanity to solve its central technical challenge. I don’t need a lot of hope in a specific research program in order to be satisfied with the field’s allocation of resources; I just want to grow the space of attempts to solve the generalization problem at all.
Stuart Armstrong & Concept Extrapolation
Nate: (Note: This section consists of actual quotes and dialog, unlike the others.)[4]
Stuart, in a blog post:
[...] It is easy to point at current examples of agents with low (or high) impact, at safe (or dangerous) suggestions, at low (or high) powered behaviours. So we have in a sense the ‘training sets’ for defining low-impact/Oracles/low-powered AIs.
It’s extending these examples to the general situation that fails: definitions which cleanly divide the training set (whether produced by algorithms or humans) fail to extend to the general situation. Call this the ‘value extrapolation problem’, with ‘value’ interpreted broadly as a categorisation of situations into desirable and undesirable.
[...] Value extrapolation is thus necessary for AI alignment.
[...] We think that once humanity builds its first AGI, superintelligence is likely near, leaving little time to develop AI safety at that point. Indeed, it may be necessary that the first AGI start off aligned: we may not have the time or resources to convince its developers to retrofit alignment to it. So we need a way to have alignment deployed throughout the algorithmic world before anyone develops AGI.
To do this, we’ll start by offering alignment as a service for more limited AIs. Value extrapolation scales down as well as up: companies value algorithms that won’t immediately misbehave in new situations, algorithms that will become conservative and ask for guidance when facing ambiguity.
We will get this service into widespread use (a process that may take some time), and gradually upgrade it to a full alignment process. [...]
Rob Bensinger, replying on Twitter: The basic idea in that post seems to be: let’s make it an industry standard for AI systems to “become conservative and ask for guidance when facing ambiguity”, and gradually improve the standard from there as we figure out more alignment stuff.
The reasoning being something like: once we have AGI, we need to have deployment-ready aligned AGI extremely soon; and this will be more possible if the non-AGI preceding it is largely aligned.
(I at least agree with the “once we have AGI, we’ll need deployment-ready aligned AGI extremely soon” part of this.)
The other aspect of your plan seems to be ‘focus on improving value extrapolation methods’. Both aspects of this plan seem very bad to me, speaking from my inside view:
1a. I don’t expect that much overlap between what’s needed to make, e.g., a present-day image classifier more conservative, and what’s needed to make an AGI reliable and safe. So redirecting resources from the latter problem to the former seems wasteful to me.
1b. Relatedly, I don’t think it’s helpful for the field to absorb the message “oh, yeah, our image classifiers and Go players and so on are aligned, we’re knocking that problem out of the park”. If 1a is right, then making your image classifier conservative doesn’t represent much progress toward being able to align AGI. They’re different problems, like building a safe bridge vs. building a safe elevator.
‘Alignment’ is currently a word that’s about the AGI problem in particular, which overlaps with a lot of narrow-AI robustness problems, but isn’t just a scaled-up version of those; the difficulty of AGI alignment mostly comes from qualitatively new risks. So ‘aligning’ the field as a whole doesn’t necessarily help much, and (less importantly) using the term ‘alignment’ for the broader, fuzzier goal is liable to distract from the core difficulties, and liable to engender a false sense of progress on the original problem.
2. We need to do value extrapolation eventually, but I don’t think this is the field’s current big bottleneck, and I don’t think it helps address the bottleneck. Rather, I think the big bottleneck is understandability / interpretability.
Nate: I like Rob’s response. I’ll add that I’m not sure I understand your proposal. Your previous name for the value extrapolation problem was the “model splintering” problem, and iirc you endorsed Rohin’s summary of model splintering:
[Model splintering] is one way of more formally looking at the out-of-distribution problem in machine learning: instead of simply saying that we are out of distribution, we look at the model that the AI previously had, and see what model it transitions to in the new distribution, and analyze this transition.
Model splintering in particular refers to the phenomenon where a coarse-grained model is “splintered” into a more fine-grained model, with a one-to-many mapping between the environments that the coarse-grained model can distinguish between and the environments that the fine-grained model can distinguish between (this is what it means to be more fine-grained).
On the surface, work aimed at understanding and addressing “model splintering” sounds potentially promising to me—like, I might want to classify some version of “concept extrapolation” alongside Natural Abstractions, certain approaches to interpretability, Vanessa’s work, Scott’s work, etc. as “an angle of attack that might genuinely help with the core problem, if it succeeded wildly more than I expect it to succeed”. Which is about as positive a situation as I’m expecting right now, and would be high praise in my books.
But in the past, I’ve often heard you use words and phrases in ways that I find promising at a glance, to mean things that I end up finding much less promising when I dig in on the specifics of what you’re talking about. So I’m initially skeptical, especially insofar as I don’t understand your proposal well.
I’d be interested in hearing how you think your proposal addresses the sharp left turn, if you think it does; or maybe you can give me pointers toward particular paragraphs/sections you’ve written up that you think already speak to this problem.
Regarding work on image-classifier conservatism: at a first glance, I don’t have much confidence that the types of generalization you’re shooting for are tracking the possibility of sharp left turns. “We want our solutions to generalize” is cheap to say; things that engage with the sharp left turn are more expensive. What’s an example of a kind of present-day research on image classifier conservatism that you’d expect to help with the sharp left turn (if you do think any would help)?
Rebecca Gorman, in an email thread: We’re working towards something that achieves interpretability objectives, and does so better than current approaches.
Agreed that AGI alignment isn’t just a scaled-up version of narrow-AI robustness problems. But if we need to establish the foundations of alignment before we reach AGI and build it into every AI being built today (since we don’t know when and where superintelligence will arise), then we need to try to scale down the alignment problem to something we can start to research today.
As for the article [A central AI alignment problem: capabilities generalization, and the sharp left turn]: I think it’s an excellent article, but I’ll give an insufficient response. I agree that capabilities form an attractor well. And that we don’t get a strong understanding of human values as easily. That’s why we think it’s important to invest energy and resources into giving AI a strong understanding of human values; it’s probably a harder problem. But—at a high level, some of the methods for getting there may generalize. That, at least, is a hopeful statement.
Nate: That sounds like a laudable goal. I have not yet managed to understand what sort of foundations of alignment you’re trying to scale down and build into modern systems. What are you hoping to build into modern systems, and how do you expect it to relate to the problem of aligning systems with capabilities that generalize far outside of training?
So far, from parts of the aforementioned email thread that have been elided in this dialog, I have not yet managed to extract a plan beyond “generate training data that helps things like modern image classifiers distinguish intended features (such as ‘pre-treatment collapsed lung’ from ‘post-treatment collapsed lung with chest drains installed’, despite the chest-drains being easier to detect than the collapse itself)”, and I don’t yet see how generating this sort of training data and training modern image-classifiers thereon addresses the tricky alignment challenges I worry about.
Stuart, in an email thread: In simple or typical environments, simple proxies can achieve desired goals. Thus AIs tend to learn simple proxies, either directly (programmers write down what they currently think the goal is, leaving important pieces out) or indirectly (a simple proxy fits the training data they receive—eg image classifiers focusing on spurious correlations).
Then the AI develops a more complicated world model, either because the AI is becoming smarter or because the environment changes by itself. At this point, by the usual Goodhart arguments, the simple proxy no longer encodes desired goals, and can be actively pernicious.
What we’re trying to do is to ensure that, when the AI transitions to a different world model, this updates its reward function at the same time. Capability increases should lead immediately to alignment increases (or at least alignment changes); this is the whole model splintering/value extrapolation approach.
The benchmark we published is a much-simplified example of this: the “typical environment” is the labeled datasets where facial expression and text are fully correlated. The “simple proxy/simple reward function” is the labeling of these images. The “more complicated world model” is the unlabeled data that the algorithm encounters, which includes images where the expression feature and the text feature are uncorrelated. The “alignment increase” (or, at least, the first step of this) is the algorithm realising that there are multiple distinct features in its “world model” (the unlabeled images) that could explain the labels, and thus generating multiple candidates for its “reward function”.
One valid question worth asking is why we focused on image classification in a rather narrow toy example. The answer is that, after many years of work in this area, we’ve concluded that the key insights in extending reward functions do not lie in high-level philosophy, mathematics, or modelling. These have been useful, but have (temporarily?) run their course. Instead, practical experiments in value extrapolation seem necessary—and these will ultimately generate theoretical insights. Indeed, this has already happened; we now have, I believe, a much better understanding of model splintering than before we started working on this.
As a minor example, this approach seems to generate a new form of interpretability. When the algorithm asks the human to label a “smiling face with SAD written on it”, it doesn’t have a deep understanding of either expression or text; nor do humans have an understanding of what features it is really using. Nevertheless, seeing the ambiguous image gives us direct insight into the “reward functions” it is comparing, a potential new form of interpretability. There are other novel theoretical insights which we’ve been discussing in the company, but they’re not yet written up for public presentation.
We’re planning to generalise the approach and insights from image classifiers to other agent designs (RL agents, recommender systems, language models...); this will generate more insights and understanding on how value extrapolation works in general.
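The benchmark structure Stuart describes can be sketched schematically. This is a minimal illustrative sketch only, not the published benchmark code; the feature names and labeling rules are assumptions made up for exposition:

```python
# Sketch of the value-extrapolation benchmark setup described above
# (hypothetical features, not the actual implementation).
# In training, expression and text are fully correlated, so two different
# labeling rules fit the data equally well; on ambiguous data they diverge.

# Each image is represented abstractly as (expression, text) features.
train = [("smile", "HAPPY", 1), ("frown", "SAD", 0)] * 3  # fully correlated

# Two candidate "reward functions" (labeling rules) consistent with training:
candidates = {
    "expression-based": lambda ex, tx: 1 if ex == "smile" else 0,
    "text-based": lambda ex, tx: 1 if tx == "HAPPY" else 0,
}

# Both fit the training labels perfectly:
for name, f in candidates.items():
    assert all(f(ex, tx) == y for ex, tx, y in train)

# On an ambiguous image (a "smiling face with SAD written on it") the
# candidates disagree, so the algorithm flags it for human labeling
# instead of silently committing to one proxy:
ambiguous = ("smile", "SAD")
outputs = {name: f(*ambiguous) for name, f in candidates.items()}
if len(set(outputs.values())) > 1:
    print("ambiguous: ask human to label", ambiguous)
```

The point of the sketch is the last step: because multiple reward-function candidates survive training, disagreement on new data becomes a signal to query the human rather than to extrapolate blindly.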
Nate: In Nate-speak, the main thing I took away from what you’ve said is “I want alignment to generalize when capabilities generalize. Also, we’re hoping to get modern image classifiers to ask for labels on ambiguous data.”
“Get the AI to ask for labels on ambiguous data” is one of many ideas I’d put on a list of shallow alignment ideas that are worth implementing. To my eye, it doesn’t seem particularly related to the problem of pointing an AGI at something in a way that’s robust to capabilities-start-generalizing.
It’s a fine simple tool to use to help point at the concept you were hoping to point at, if you can get an AGI to do the thing you’re pointing toward at all, and it would be embarrassing if we didn’t try it. And I’m happy to have people trying early versions of such things as soon as possible. But I don’t see these sorts of things as shedding much light on how you get a post-left-turn AGI to optimize for some concept of your choosing in the first place. If you could do that, then sure, getting it to ask for clarification when the training data is ambiguous is a nice extra saving throw (if it wasn’t already doing that automatically because of some deeper corrigibility success), but I don’t currently see this sort of thing as attacking one of the core issues.[5]
Andrew Critch & political solutions
Imaginary Andrew Critch: Just politick between the AGI teams and get them all to agree to take the problem seriously, not race, not cut corners on safety, etc.
Nate: Uh, that ship sailed in, like, late 2015. My fairly-strong impression, from my proximity to the current politics between the current orgs, is “nope”.
Also, even if this wasn’t a straight-up “nope”, you have the question of what you do with your cooperation. Somehow you’ve still got to leverage this cooperation into the end of the acute risk period, before the people outside your alliance end the world. And this involves having a leadership structure that can distinguish bad plans from good ones.
The alliance helps, for sure. It takes a bunch of the time pressure off (assuming your management is legibly capable of distinguishing good deployment ideas from bad ones). I endorse attempts to form such an alliance. (And it sure would be undignified for our world to die of antitrust law at the final extremity.) But it’s not an attempt to solve this hard technical problem, and it doesn’t alleviate enough pressure to cause me to think that the problem would eventually be solved, in this field where ~nobody manages to strike for the heart of the problem before them.
Imaginary Andrew Critch: So get global coordination going! Or have some major nation-state regulate global use of AI, in some legitimate way!
Nate: Here I basically have the same response: First, can’t be done (though I endorse attempts to prove me wrong, and recommend practicing by trying to effect important political change on smaller-stakes challenges ASAP (The time is ripe for sweeping global coordination in pandemic preparedness! We just had our warning shot! If we’ll be able to do something about AGI later, presumably we can do something analogous about pandemics now!)).
Second, it doesn’t alleviate enough pressure; the bureaucrats can’t tell real solutions from bad ones; the cost to build an unaligned AGI drops each year; etc., etc. Sufficiently good global coordination is a win condition, but we’re not anywhere close to on track for that, and in real life we’re still going to need technical solutions.
Which, apparently, only a handful of people in the world are trying to provide.
What about superbabies?
Nate: I doubt we have the time, but sure, go for superbabies. It’s as dignified as any of the other attempts to walk around this hard problem.
What about other MIRI people?
There are a few people supported at least in part by MIRI (such as Scott and Vanessa) who seem to me to have identified confusing and poorly-understood aspects of cognition. And their targets strike me as the sort of things where if we got less confused about what the heck was going on, then we might thereby achieve a somewhat better understanding of minds/optimization/etc., in a way that sheds some light on the hard problems. So yeah, I’d chalk a few other MIRI-supported folk up in the “trying to tackle the hard problems” column.
We still wouldn’t have anything close to a full understanding, and at the progress rate of the last decade, I’d expect it to take a century for research directions like these to actually get us to an understanding of minds sufficient to align them.
Maybe early breakthroughs chain into follow-up breakthroughs that shorten that time? Or maybe if you have fifty people trying that sort of thing, instead of 3–6, one of them ends up tugging on a thread that unravels the whole knot if they manage to succeed in time. It seems good to me that researchers are trying approaches like these, but the existence of a handful of people making such an attempt doesn’t seem to me to represent much of an update about humanity’s odds of survival.
High-level view
I again stress that all the people whose plans I am pessimistic about are people that I consider virtuous, and whose efforts I applaud. (And that my characterizations of people above are probably not endorsed by those people, and that I’m putting less effort into passing their ideological Turing Tests than would be virtuous of me, etc. etc.)
Nevertheless, my overall impression is that most of the new people coming into alignment research end up pursuing research that seems doomed to me, not just because they’re unlikely to succeed at their stated research goals, but because their stated research goals have little overlap with what seem to me to be the tricky bits. Or, well, that’s what happens at best; what happens at worst is they wind up doing capabilities work with a thin veneer of alignment research.
Perhaps unfairly, my subjective experience of people entering the alignment research field is that there are:
a bunch of plans like Owen’s (that seem to me to just completely miss the problem),
and a bunch of people who study some local phenomenon of modern systems that seems to me to have little relationship to the difficult problems that I expect to arise once things start getting serious, while calling that “alignment” (thus watering down the term, and allowing them to convince themselves that alignment is actually easy because it’s just as easy to train a language model to answer “morality” questions as it is to train it to explain jokes or whatever),
and a few people who do capabilities work so that they can “stay near the action”,
and very few who are taking stabs at the hard problems.
An exception is interpretability work, which I endorse, and which I think is rightly receiving effort (though I will caveat that some grim part of me expects that somehow interpretability work will be used to boost capabilities long before it gets to the high level required to face down the tricky problems I expect in the late game). And there are definitely a handful of folk plugging away at research proposals that seem to me to have non-trivial inner product with the tricky problems.
In fact, when writing this list, I was slightly pleasantly surprised by how many of the research directions seem to me to have non-trivial inner product with the tricky problems.[6]
This isn’t as much of a positive update as it might first seem, on account of how it looks to me like the total effort in the field is not distributed evenly across all the above proposals, and I still have a general sense that most researchers aren’t really asking questions whose answers would really help us out. But it is something of a positive update nevertheless.
Returning to one of the better-by-my-lights proposals from above, Natural Abstractions: If this agenda succeeded and was correct in a key hypothesis, this would directly solve a big chunk of the problem.
I don’t buy the key hypothesis (in the relevant way), and I don’t expect that agenda to succeed.[7] But if I was saying that about a hundred pretty-uncorrelated agendas being pursued by two hundred people, I’d start to think that maybe the odds are in our favor.
My overall impression is still that when I actually look at the particular community we have, weighted by person-hours, the large majority of the field isn’t trying to solve the problem(s) I expect to kill us. They’re just wandering off in some other direction.
It could turn out that I’m wrong about one of these other directions. But “turns out the hard/deep problem I thought I could see, did not in fact exist” feels a lot less likely, on my own models, than “one of these 100 people, whose research would clearly solve the problem if it achieved its self-professed goals, might in fact be able to achieve their goals (despite me not sharing their research intuitions)”.
So the status quo looks grim to me.
I in fact think it’s nice to have some people saying “we can totally route around that problem”, and then pursuing research paths that they think route around the problem!
But currently, we have only a few fractions of plans that look to me to be trying to solve the problem that I expect to actually kill us. Like a field of contingency plans with no work going into a Plan A; or like a field of pandemic preparedness that immediately turned its gaze away from the true disaster scenarios and focused the vast majority of its effort on ideas like “get people to eat healthier so that their immune systems will be better-prepared”. (Not a perfect analogy; sorry.)
Hence: I’m not highly-pessimistic about our prospects because I think this problem is extraordinarily hard. I think this problem is normally hard, and very little effort is being deployed toward solving it.
Like, you know how some people out there (who I’m reluctant to name for fear that reminding them of their old stances will contribute to fixing them in their old ways) are like, “Your mistake was attempting to put a goal into the AGI; what you actually need to do is keep your hands off it and raise it compassionately!”? And from our perspective, they’re just walking blindly into the razor blades?
And then other people are like, “The problem is giving the AGI a bad goal, or letting bad people control it”, and… well, that’s probably still where some of you get off the train, but to the rest of us, these people also look like they’re walking willfully into the razor blades?
Well, from my perspective, the people who are like, “Just keep training it on your objective while being somewhat clever about the training, maybe that empirically works”, are also walking directly into the razor blades.
(And it doesn’t help that a bunch of folks are like “Well, if you’re right, then we’ll be able to update later, when we observe that getting language models to answer ethical questions is mysteriously trickier than getting them to answer other sorts of questions”, apparently impervious to my cries of “No, my model does not predict that, my model does not predict that we get all that much more advance evidence than we’ve got already”. If the evidence we have isn’t enough to get people focused on the central problems, then we seem to me to be in rather a lot of trouble.)
My current prophecy is not so much “death by problem too hard” as “death by problem not assailed”.
Which is absolutely a challenge. I’d love to see more people attacking the things that seem to me like they’re at the core.
- ^
I ran a few of the dialogs past the relevant people, but that has empirically dragged out the amount of time it takes this post to publish, and I have a handful of other posts to publish afterwards, so I neglected to get feedback from most of the people mentioned. Sorry.
- ^
Much of Vanessa, Scott, etc.’s work does look to me like it is grappling with confusions related to the problem of aiming minds in theory, and if their research succeeds according to their own lights then I would expect to have a better understanding of how to aim minds in general, even ones that had undergone some sort of “sharp left turn”.
Which is not to say that I’m optimistic about whether any of these plans will succeed by their own lights. Regardless, they get points for taking a swing, and the thing I’m mostly advocating for is that more people take swings at this problem at all, not that we filter strongly on my optimism about specific angles of attack.
I tried to solve the problem myself for a few years, and failed. Turns out I wasn’t all that good at it.
Maybe I’ll be able to do better next time, and I poke at it every so often. (Even though in my mainline prediction, we won’t have the time to complete the sort of research paths that I can see and that I think have any chance of working.)
MIRI funds or offers-to-fund most every researcher who I see as having this “their work would help with the generalization problem if they succeeded” property and as doing novel, nontrivial work, so it’s no coincidence that I feel more positive about Vanessa, etc.’s work. But I’d like to see far more attempts to solve this problem than the field is currently marshaling.
- ^
Again, to be clear, it’s nice to have some people trying to route around the hard problems wholesale. But I don’t count such attempts as attacks on the problem itself. (I’m also not optimistic about any attempts I have yet seen to dodge the problem, but that’s a digression from today’s topic.)
- ^
I couldn’t understand Stuart’s views from what he’s written publicly, so I ran this section by Stuart and Rebecca, who requested that I use actual quotes instead of my attempted paraphrasings. If I’d had more time, I’d like to have run all the dialogs by the researchers I mentioned in this post, and iterated until I could pass everyone’s ideological Turing Test, as opposed to the current awkward set-up where the people that I thought I understood didn’t get as much chance for feedback. But the time delay from editing this one section is evidence that this wouldn’t be worth the time burnt. Instead, I hope the comments can correct any mischaracterizations on my part.
- ^
Note also that while having the AI ask for clarification in the face of ambiguity is nice and helpful, it is of course far from autonomous-AGI-grade.
- ^
I specifically see:
~3 MIRI-supported research approaches that are trying to attack a chunk of the hard problem (with a caveat that I think the relevant chunks are too small and progress is too slow for this to increase humanity’s odds of success by much).
~1 other research approach that could maybe help address the core difficulty if it succeeds wildly more than I currently expect it to succeed (albeit no one is currently spending much time on this research approach): Natural Abstractions. Maybe 2, if you count sufficiently ambitious interpretability work.
~2 research approaches that mostly don’t help address the core difficulty (unless perhaps more ambitious versions of those proposals are developed, and the ambitious versions wildly succeed), but might provide small safety boosts on the mainline if other research addresses the core difficulty: Concept Extrapolation, and current interpretability work (with a caveat that sufficiently ambitious interpretability work would seem more promising to me than this).
9+ approaches that appear to me to be either assuming away what look to me like the key problems, or hoping that we can do other things that allow us to avoid facing the problem: Truthful AI, ELK, AI Services, Evan’s approach, the Richard/Rohin meta-approach, Vivek’s approach, Critch’s approach, superbabies, and the “maybe there is a pretty wide attractor basin around my own values” idea.
- ^
I rate “interpretability succeeds so wildly that we can understand and aim one of the first AGIs” as probably a bit more plausible than “natural abstractions are so natural that, by understanding them, we can practically find concepts-worth-optimizing-for in an AGI”. Both seem very unlikely to me, though they meet my bar for “deserving of a serious effort by humanity” in case they work out.
I really liked this post in that it seems to me to have tried quite seriously to engage with a bunch of other people’s research, in a way that I feel like is quite rare in the field, and something I would like to see more of.
One of the key challenges I see for the rationality/AI-Alignment/EA community is the difficulty of somehow building institutions that are not premised on the quality or tractability of their own work. My current best guess is that the field of AI Alignment has made very little progress in the last few years, which is really not what you might think when you observe the enormous amount of talent, funding and prestige flooding into the space, and the relatively constant refrain of “now that we have cutting edge systems to play around with we are making progress at an unprecedented rate”.
It is quite plausible to me that technical AI Alignment research is not a particularly valuable thing to be doing right now. I don’t think I have seen much progress, and the dynamics of the field seem to be enshrining an expert class that seems almost ontologically committed to believing that the things they are working on must be good and tractable, because their salary and social standing relies on believing that.
This and a few other similar posts last year are the kind of post that helped me come to understand the considerations around this crucial question better, and where I am grateful that Nate, despite having spent a lot of his life on solving the technical AI Alignment problem, is willing to question the tractability of the whole field. This specific post is more oriented around other people’s work, though other posts by Nate and Eliezer are also facing the degree to which their past work didn’t make the relevant progress they were hoping for.
I should acknowledge first that I understand that writing is hard. If the only realistic choice was between this post as it is, and no post at all, then I’m glad we got the post rather than no post.
That said, by the standards I hold my own writing to, I would be embarrassed to publish a post like this which criticizes imaginary paraphrases of researchers, rather than citing and quoting the actual text they’ve published. (The post acknowledges this as a flaw, but if it were me, I wouldn’t even publish.) The reason I don’t think critics necessarily need to be able to pass an author’s Ideological Turing Test is that, as a critic, I can at least be scrupulous in my reasoning about the actual text that the author actually published, even if the stereotype of the author I have in my head is faulty. If I can’t produce the quotes to show that I’m not just arguing against a stereotype in my head, then it’s not clear why the audience should care.
This seems right to me for posts replying to individual authors/topics (and I think this criticism may apply to some other more targeted Nate posts in that vein).
But I think for giving his takes on a large breadth of people, making sure each section is well vetted increases the cost by a really prohibitive amount, and I think it’s probably better to do it the way Nate did here (clearly establishing the epistemic status of the post, and letting people in the comments argue if he got something wrong).
Also, curious if you think there’s a particular instance where someone(s) felt misrepresented here? (I just tried doing a skim of the comments; there were a lot of them, and the first ones I saw seemed more like arguing with the substance of the disagreement rather than his characterization being wrong. I gave up kinda quickly, but for now, did you recall him getting something wrong here, or just think on general principle that one shouldn’t err in this direction?)